
    In All Likelihood, Deep Belief Is Not Enough

    Statistical models of natural stimuli provide an important tool for researchers in the fields of machine learning and computational neuroscience. A canonical way to quantitatively assess and compare the performance of statistical models is the likelihood. Deep belief networks are one class of statistical models that has recently gained popularity and has been applied to a variety of complex data. Analyses of these models, however, have typically been limited to qualitative comparisons based on samples, because the model likelihood is computationally intractable. Motivated by these circumstances, the present article provides a consistent estimator for the likelihood that is both computationally tractable and simple to apply in practice. Using this estimator, a deep belief network that has been suggested for modeling natural image patches is quantitatively investigated and compared to other models of natural image patches. Contrary to earlier claims based on qualitative results, the results presented in this article provide evidence that the model under investigation is not a particularly good model for natural image patches.
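    The general idea behind such an estimator can be illustrated on a toy latent-variable model (this is a sketch of sampling-based likelihood estimation in general, not the article's deep-belief-network estimator; all parameters below are illustrative). The latent state can be enumerated here, so the Monte Carlo estimate can be checked against the exact marginal likelihood:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy latent-variable model: h ~ Categorical(pi),
# x | h ~ product of Bernoullis with means theta[h].
pi = np.array([0.5, 0.3, 0.2])
theta = np.array([[0.9, 0.1, 0.8, 0.2, 0.7],
                  [0.2, 0.8, 0.3, 0.9, 0.1],
                  [0.5, 0.5, 0.5, 0.5, 0.5]])

def cond_lik(x, k):
    """p(x | h = k) for a binary vector x."""
    return np.prod(theta[k] ** x * (1.0 - theta[k]) ** (1 - x))

x = np.array([1, 0, 1, 0, 1])

# Exact marginal likelihood p(x) by enumerating the latent state --
# feasible here, intractable for a deep belief network.
exact = sum(pi[k] * cond_lik(x, k) for k in range(len(pi)))

# Consistent Monte Carlo estimator: sample h from its prior and
# average p(x | h); converges to p(x) as the sample count grows.
h = rng.choice(len(pi), size=200_000, p=pi)
per_state = np.array([cond_lik(x, k) for k in range(len(pi))])
estimate = per_state[h].mean()

print(f"exact={exact:.5f}  estimate={estimate:.5f}")
```

    For a deep belief network the enumeration step is unavailable, which is why a tractable, consistent sampling-based estimator is needed in the first place.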

    Inferring decoding strategy from choice probabilities in the presence of noise correlations

    The activity of cortical neurons in sensory areas covaries with perceptual decisions, a relationship often quantified by choice probabilities. While choice probabilities have been measured extensively, their interpretation has remained fraught with difficulty. Here, we derive the mathematical relationship between choice probabilities, read-out weights and noise correlations within the standard neural decision making model. Our solution allows us to prove and generalize earlier observations based on numerical simulations, and to derive novel predictions. Importantly, we show how the read-out weight profile, or decoding strategy, can be inferred from experimentally measurable quantities. Furthermore, we present a test to decide whether the decoding weights of individual neurons are optimal, even without knowing the underlying noise correlations. We confirm the practical feasibility of our approach using simulated data from a realistic population model. Our work thus provides the theoretical foundation for a growing body of experimental results on choice probabilities and correlations.
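    The standard model the abstract refers to can be sketched in a few lines (a hedged illustration with made-up parameters, not the paper's derivation): responses on zero-signal trials are correlated Gaussian noise, the choice is the sign of a weighted sum, and each neuron's choice probability is the ROC area between its response distributions conditioned on the choice:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative noise covariance and read-out weights (not from the paper).
Sigma = np.array([[1.0, 0.3, 0.1],
                  [0.3, 1.0, 0.2],
                  [0.1, 0.2, 1.0]])
w = np.array([1.0, 0.5, 0.1])

# Zero-signal trials: r ~ N(0, Sigma); choice = sign(w . r).
T = 50_000
r = rng.multivariate_normal(np.zeros(3), Sigma, size=T)
choice = r @ w > 0

def choice_probability(x, choice):
    """ROC area between responses conditioned on the animal's choice."""
    ranks = np.empty(len(x))
    ranks[np.argsort(x)] = np.arange(1, len(x) + 1)
    n1, n0 = choice.sum(), (~choice).sum()
    return (ranks[choice].sum() - n1 * (n1 + 1) / 2) / (n1 * n0)

cp = np.array([choice_probability(r[:, i], choice) for i in range(3)])

# Each neuron's correlation with the decision variable w . r is
# (Sigma w)_i / sqrt(Sigma_ii * w' Sigma w); CPs follow its ordering,
# illustrating how CPs mix read-out weights with noise correlations.
rho = (Sigma @ w) / np.sqrt(np.diag(Sigma) * (w @ Sigma @ w))
print("CP:", np.round(cp, 3), " rho:", np.round(rho, 3))
```

    The simulation shows the core difficulty the paper addresses: choice probabilities depend on the product of weights and correlations, so inferring the decoding strategy requires disentangling the two.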

    Optimal Population Coding, Revisited

    Cortical circuits perform the computations underlying rapid perceptual decisions within a few dozen milliseconds, with each neuron emitting only a few spikes. Under these conditions, the theoretical analysis of neural population codes is challenging, as the most commonly used theoretical tool – Fisher information – can lead to erroneous conclusions about the optimality of different coding schemes. Here we revisit the effect of tuning function width and correlation structure on neural population codes based on ideal observer analysis in both a discrimination and a reconstruction task. We show that the optimal tuning function width and the optimal correlation structure in both paradigms strongly depend on the available decoding time in a very similar way. In contrast, population codes optimized for Fisher information do not depend on decoding time and are severely suboptimal when only a few spikes are available. In addition, we use the neurometric functions of the ideal observer in the classification task to investigate the differential coding properties of these Fisher-optimal codes for fine and coarse discrimination. We find that the discrimination error for these codes does not decrease to zero with increasing population size, even in simple coarse discrimination tasks. Our results suggest that quite different population codes may be optimal for rapid decoding in cortical computations than those inferred from the optimization of Fisher information.
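    The quantity at issue is the textbook population Fisher information; for independent Poisson neurons it is I(θ) = Σᵢ fᵢ′(θ)²/fᵢ(θ). A minimal sketch with illustrative parameters shows why it favors narrow tuning in one dimension, regardless of how many spikes are actually available, which is exactly the criterion the abstract argues can mislead:

```python
import numpy as np

def fisher_info(theta, centers, width, r_max=10.0, baseline=0.1):
    """Fisher information of an independent-Poisson population with
    Gaussian tuning curves: I(theta) = sum_i f_i'(theta)^2 / f_i(theta)."""
    g = r_max * np.exp(-(theta - centers) ** 2 / (2 * width ** 2))
    f = g + baseline
    df = g * (centers - theta) / width ** 2   # derivative d f_i / d theta
    return np.sum(df ** 2 / f)

# Densely tiled preferred stimuli (illustrative).
centers = np.arange(-10, 10, 0.2)

I_narrow = fisher_info(0.0, centers, width=0.5)
I_wide = fisher_info(0.0, centers, width=2.0)
print(f"Fisher info, narrow tuning: {I_narrow:.1f}, wide tuning: {I_wide:.1f}")
```

    Because this expression contains no notion of decoding time, a code optimized for it looks the same whether the decoder sees one spike or thousands, which is the gap the ideal-observer analysis in the paper closes.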

    Bayesian Inference for Generalized Linear Models for Spiking Neurons

    Generalized Linear Models (GLMs) are commonly used statistical methods for modelling the relationship between neural population activity and presented stimuli. When the dimension of the parameter space is large, strong regularization has to be used in order to fit GLMs to datasets of realistic size without overfitting. By imposing properly chosen priors over parameters, Bayesian inference provides an effective and principled approach for achieving regularization. Here we show how the posterior distribution over model parameters of GLMs can be approximated by a Gaussian using the Expectation Propagation algorithm. In this way, we obtain an estimate of the posterior mean and posterior covariance, allowing us to calculate Bayesian confidence intervals that characterize the uncertainty about the optimal solution. From the posterior we also obtain a different point estimate, namely the posterior mean, as opposed to the commonly used maximum a posteriori estimate. We systematically compare the different inference techniques on simulated as well as on multi-electrode recordings of retinal ganglion cells, and explore the effects of the chosen prior and the performance measure used. We find that good performance can be achieved by choosing a Laplace prior together with the posterior mean estimate.
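    The paper uses Expectation Propagation; as a simpler stand-in for the same idea (a Gaussian approximation to the GLM posterior), the sketch below uses the Laplace approximation for a Poisson GLM: a Newton ascent to the MAP estimate, with the posterior covariance taken as the inverse negative Hessian. Data and parameters are simulated for illustration, not the paper's recordings:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated Poisson GLM: spike counts y ~ Poisson(exp(X w)).
T, d = 5000, 5
X = rng.standard_normal((T, d))
w_true = np.array([0.5, -0.3, 0.2, 0.0, 0.1])
y = rng.poisson(np.exp(X @ w_true))

alpha = 1.0   # Gaussian prior precision (the regularizer)

# Newton's method for the MAP estimate of the log-posterior.
w = np.zeros(d)
for _ in range(25):
    rate = np.exp(X @ w)
    grad = X.T @ (y - rate) - alpha * w
    hess = -X.T @ (X * rate[:, None]) - alpha * np.eye(d)
    w -= np.linalg.solve(hess, grad)

# Laplace approximation: Gaussian posterior centered at the MAP,
# covariance = inverse negative Hessian at the mode.
rate = np.exp(X @ w)
hess = -X.T @ (X * rate[:, None]) - alpha * np.eye(d)
cov = np.linalg.inv(-hess)
stderr = np.sqrt(np.diag(cov))   # per-weight posterior std, for intervals
print("w_map:", np.round(w, 3), " stderr:", np.round(stderr, 3))
```

    The per-weight posterior standard deviations are what turn a point estimate into the Bayesian confidence intervals the abstract describes; EP replaces the local curvature-at-the-mode approximation with iterative moment matching.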

    Validation of Composite Systems by Discrepancy Propagation

    Assessing the validity of a real-world system with respect to given quality criteria is a common yet costly task in industrial applications due to the vast number of required real-world tests. Validating such systems by means of simulation offers a promising and less expensive alternative, but requires an assessment of the simulation accuracy and therefore end-to-end measurements. Additionally, covariate shifts between simulations and actual usage can cause difficulties for estimating the reliability of such systems. In this work, we present a validation method that propagates bounds on distributional discrepancy measures through a composite system, thereby allowing us to derive an upper bound on the failure probability of the real system from potentially inaccurate simulations. Each propagation step entails an optimization problem, where -- for measures such as maximum mean discrepancy (MMD) -- we develop tight convex relaxations based on semidefinite programs. We demonstrate that our propagation method yields valid and useful bounds for composite systems exhibiting a variety of realistic effects. In particular, we show that the proposed method can successfully account for data shifts within the experimental design as well as model inaccuracies within the simulation used.
    Comment: 20 pages incl. 10 pages appendix
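    The discrepancy measure at the heart of the method can be estimated from samples alone. The sketch below is the standard unbiased MMD² estimator with an RBF kernel (textbook material, not the paper's semidefinite relaxations), showing that it sits near zero for matched distributions and becomes clearly positive under a covariate shift; the sample sets and the shift are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

def rbf_kernel(X, Y, gamma=0.5):
    """Gaussian RBF kernel matrix k(x, y) = exp(-gamma * ||x - y||^2)."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def mmd2_unbiased(X, Y, gamma=0.5):
    """Unbiased estimator of the squared maximum mean discrepancy."""
    Kxx, Kyy, Kxy = (rbf_kernel(A, B, gamma)
                     for A, B in [(X, X), (Y, Y), (X, Y)])
    n, m = len(X), len(Y)
    np.fill_diagonal(Kxx, 0.0)   # drop self-similarity terms (unbiasedness)
    np.fill_diagonal(Kyy, 0.0)
    return Kxx.sum() / (n * (n - 1)) + Kyy.sum() / (m * (m - 1)) - 2 * Kxy.mean()

X = rng.standard_normal((500, 2))                      # "simulation" samples
Y_same = rng.standard_normal((500, 2))                 # matched distribution
Y_shift = rng.standard_normal((500, 2)) + [1.0, 0.0]   # covariate-shifted

m_same = mmd2_unbiased(X, Y_same)
m_shift = mmd2_unbiased(X, Y_shift)
print(f"MMD^2 same: {m_same:.4f}, shifted: {m_shift:.4f}")
```

    The paper's contribution is to propagate bounds on such discrepancies through the components of a composite system, rather than to estimate them for a single pair of sample sets.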

    Bayesian Inference for Spiking Neuron Models with a Sparsity Prior

    Generalized linear models are the most commonly used tools to describe the stimulus selectivity of sensory neurons. Here we present a Bayesian treatment of such models. Using the expectation propagation algorithm, we are able to approximate the full posterior distribution over all weights. In addition, we use a Laplacian prior to favor sparse solutions. Therefore, stimulus features that do not critically influence neural activity will be assigned zero weights and thus be effectively excluded by the model. This feature selection mechanism facilitates both the interpretation of the neuron model and its predictive performance. The posterior distribution can be used to obtain confidence intervals, which makes it possible to assess the statistical significance of the solution. In neural data analysis, the available amount of experimental measurements is often limited whereas the parameter space is large. In such a situation, both regularization by a sparsity prior and uncertainty estimates for the model parameters are essential. We apply our method to multi-electrode recordings of retinal ganglion cells and use our uncertainty estimate to test the statistical significance of functional couplings between neurons. Furthermore, we use the sparsity of the Laplace prior to select those filters from a spike-triggered covariance analysis that are most informative about the neural response.
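    The sparsifying effect of the Laplace prior is equivalent to an L1 penalty on the MAP estimate. A minimal sketch (proximal gradient descent on a logistic GLM with simulated data; not the paper's EP procedure, and the penalty strength is an illustrative choice) shows how irrelevant stimulus features are driven exactly to zero:

```python
import numpy as np

rng = np.random.default_rng(4)

# Illustrative data: only the first 3 of 10 features drive the cell.
n, d = 1000, 10
X = rng.standard_normal((n, d))
w_true = np.zeros(d)
w_true[:3] = [1.5, -2.0, 1.0]
p = 1.0 / (1.0 + np.exp(-(X @ w_true)))
y = (rng.random(n) < p).astype(float)   # Bernoulli "spike / no spike"

# MAP under a Laplace prior == L1-penalized likelihood. Proximal
# gradient (ISTA): gradient step on the negative log-likelihood,
# then soft-thresholding, which yields exact zeros.
lam = 40.0
step = 1.0 / (0.25 * np.linalg.norm(X, 2) ** 2)   # 1 / Lipschitz constant
w = np.zeros(d)
for _ in range(3000):
    grad = X.T @ (1.0 / (1.0 + np.exp(-(X @ w))) - y)
    z = w - step * grad
    w = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)

print("nonzero weights:", np.nonzero(w)[0])
```

    The exact zeros are what make the fitted model directly interpretable: excluded features need no further significance test, while the EP posterior in the paper additionally supplies uncertainty estimates for the surviving weights.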

    Two Sides of One Coin: A Comparison of Clinical and Neurobiological Characteristics of Convicted and Non-Convicted Pedophilic Child Sexual Offenders

    The high prevalence of child sexual offending stands in contrast to the low conviction rates (one-tenth at most) of child sexual offenders (CSOs). Little is known about possible differences between convicted and non-convicted pedophilic CSOs and why only some become known to the judicial system. This investigation takes a closer look at the two sides of "child sexual offending" by focusing on clinical and neurobiological characteristics of convicted and non-convicted pedophilic CSOs as presented in the Neural Mechanisms Underlying Pedophilia and sexual offending against children (NeMUP) study. Seventy-nine male pedophilic CSOs were examined, 48 of them convicted. All participants received a thorough clinical examination, including the Structured Clinical Interview (SCID) and assessments of intelligence, empathy, impulsivity, and criminal history. Sixty-one participants (38 convicted) underwent an inhibition performance task (Go/No-go paradigm) combined with functional magnetic resonance imaging (fMRI). Convicted and non-convicted pedophilic CSOs revealed similar clinical characteristics, inhibition performances, and neuronal activation. However, convicted subjects' age preference was lower (i.e., higher interest in prepubescent children), and they had committed a significantly higher number of sexual offenses against children compared to non-convicted subjects. In conclusion, sexual age preference may represent one of the major driving forces behind the elevated rates of sexual offenses against children in this sample, and careful clinical assessment thereof should be incorporated in every preventive approach.

    Bayesianische Methoden zur neuronalen Datenanalyse [Bayesian Methods for Neural Data Analysis]

    Understanding the computations underlying information processing in the nervous system is one of the major tasks in computational neuroscience. The amount of neural data is rapidly increasing, so we need methods to analyze and interpret this data. These methods must account for the variability observed in the recorded data and handle uncertainty about the underlying processing. Furthermore, they should be tractable enough to be applicable to large data sets. Bayesian analysis provides a principled way of meeting these requirements, as it explicitly models the uncertainties involved. In this thesis, we develop feasible Bayesian methods and apply them to simulated as well as real data. We exemplify the use of these methods on three different aspects of neural coding. First, we show how state-of-the-art models can be fitted to recorded data while simultaneously obtaining model-based confidence intervals. Second, we show how probabilistic models can be used to extract the uncertain information about the stimulus from an observed spike train. Finally, within the framework of maximum entropy modeling, we study the joint distribution of spikes and stimuli.
    [German abstract, translated:] Characterizing the computations that underlie the information processing of our nervous system is one of the major challenges of theoretical neuroscience. To analyze and interpret the growing amount of neural data, we need methods developed specifically for this purpose. These methods must be able to account for both the high degree of variability in the data and possible uncertainties in the model specification. Bayesian methods meet these requirements, as they model the uncertainties explicitly. At the same time, the methods should be scalable so that they can be applied to data sets of realistic size. In this dissertation, such scalable methods were developed and applied to simulated as well as real data. I used these methods to analyze three aspects of neural coding. First, I show how probabilistic models can be used to extract the uncertain information about the presented stimulus solely from the observed neural response. Second, using the maximum entropy approach, I obtained a conservative approximation to the joint distribution of spikes and stimuli. Finally, I present an approximate method for fitting current standard models to recorded data, in particular taking the arising uncertainties into account.